AI recap: The rise of the prompt engineer and biased driverless cars

New Scientist

Could an AI prompt engineer help you get ahead at work? Artificial intelligence is capable of amazing feats, from writing a novel to creating photorealistic art, but it isn't so good at working out exactly what we want. It fails to grasp nuance or overcome poorly worded instructions. That has given rise to the new job of "prompt engineer" – people who are skilled at crafting the precise text instructions needed for AI to produce exactly what is needed – often with salaries of upwards of $375,000 a year. This ability to unlock the potential of AI with their "magic voodoo" may seem like a bit of a fad, but New Scientist found that many companies consider it surprisingly beneficial – at the moment, at least. The question is whether AI will become better at understanding what humans mean and therefore cut out the intermediaries.


A US Agency Rejected Face Recognition--and Landed in Big Trouble

WIRED

In June 2021, Dave Zvenyach, director of a group tasked with improving digital access to US government services, sent a Slack message to his team. He'd decided that Login.gov, which provides a secure way to access dozens of government apps and websites, wouldn't use selfies and face recognition to verify the identity of people creating new accounts. "The benefits of liveness/selfie do not outweigh any discriminatory impact," he wrote, referring to the process of asking users to upload a selfie and photo of their ID so that algorithms can compare the two. Face recognition technology has become more accurate, but many systems have been found to work less reliably for women with dark skin, people who identify as Asian, or people with a nonbinary gender identity. Yet Zvenyach's pronouncement also put Login.gov and US agencies using the service at odds with federal security guidelines.


The Hidden Role of Facial Recognition Tech in Many Arrests

WIRED

In April 2018, Bronx public defender Kaitlin Jackson was assigned to represent a man accused of stealing a pair of socks from a TJ Maxx store. The man said he couldn't have stolen the socks because at the time the theft occurred, he was at a hospital about three-quarters of a mile away, where his son was born about an hour later. Jackson couldn't understand how police had identified and arrested her client months after the theft. She called the Bronx District Attorney's Office, and a prosecutor told her police had identified her client from a security camera photo using facial recognition. A security guard at the store, the only witness to the theft, later told an investigator from her office that police had sent him a mugshot of her client and asked in a text message "Is this the guy?" Jackson calls that tactic "as suggestive as you can get."


The movement to hold AI accountable gains more steam

#artificialintelligence

Algorithms play a growing role in our lives, even as their flaws are becoming more apparent: a Michigan man wrongly accused of fraud had to file for bankruptcy; automated screening tools disproportionately harm people of color who want to buy a home or rent an apartment; Black Facebook users were subjected to more abuse than white users. Other automated systems have improperly rated teachers, graded students, and flagged people with dark skin more often for cheating on tests. Now, efforts are underway to better understand how AI works and hold users accountable. New York's City Council last month adopted a law requiring audits of algorithms used by employers in hiring or promotion. The law, the first of its kind in the nation, requires employers to bring in outsiders to assess whether an algorithm exhibits bias based on sex, race, or ethnicity.


Medical photography is failing patients with darker skin

#artificialintelligence

Jenna Lester, a dermatologist at the University of California San Francisco, was growing frustrated with the poor-quality images she'd receive of her dark-skinned patients. It wasn't just a cosmetic issue: the bad photos meant darker-skinned people weren't getting the same quality of care. So in January, Lester co-authored a paper in the British Journal of Dermatology that gives a step-by-step guide to photographing skin of color accurately in clinical settings. Lester, who herself is Black, said, "I feel like these issues and my life is constantly me saying, 'Hey, what about us?' 'What about these patients?'" Medical photographs are vital to documenting disease in textbooks and journals and training medical students.


De-biasing bias

#artificialintelligence

Picture a machine learning system that relies on crowdsourced data labelers to help rank music recommendations. Labelers are all different, and those differences may show up in their labels. Is that bias? The answer depends on many things, but one of them is who you are asking. Bias means different things to different people. The other day I watched a very interesting discussion along these lines between a lawyer (Jake Goldenfein) and a data scientist (Danula Hettiachchii). It seemed like my colleagues had fundamentally different ideas about bias.


AI skin cancer diagnoses risk being less accurate for dark skin – study

#artificialintelligence

AI systems being developed to diagnose skin cancer run the risk of being less accurate for people with dark skin, research suggests. The potential of AI has led to developments in healthcare, with some studies suggesting image recognition technology based on machine learning algorithms can classify skin cancers as successfully as human experts. NHS trusts have begun exploring AI to help dermatologists triage patients with skin lesions. But researchers say more needs to be done to ensure the technology benefits all patients, after finding that few freely available image databases that could be used to develop or "train" AI systems for skin cancer diagnosis contain information on ethnicity or skin type. Those that do have very few images of people with dark skin.


Which of these faces is real?

The Japan Times

These people may look familiar. They may look like users you've seen on Facebook, Twitter or Tinder, or maybe people whose product reviews you've read on Amazon. They look stunningly real at first glance, but they do not exist. They were born from the mind of a computer. There are now businesses that sell fake people.


Designed to Deceive: Do These People Look Real to You?

#artificialintelligence

These people may look familiar, like ones you've seen on Facebook or Twitter. Or people whose product reviews you've read on Amazon, or dating profiles you've seen on Tinder. They look stunningly real at first glance. But they do not exist. They were born from the mind of a computer.

